We consider the problem of continually releasing an estimate of the population mean of a stream of samples that is user-level differentially private (DP). At each time instant, a user contributes a sample, and the users can arrive in arbitrary order. Until now, these requirements of continual release and user-level privacy were considered in isolation. But in practice, both requirements come together, as users often contribute data repeatedly and multiple queries are made. We provide an algorithm that outputs a mean estimate at every time instant $t$ such that the overall release is user-level $\varepsilon$-DP and has the following error guarantee: denoting by $M_t$ the maximum number of samples contributed by a user, as long as $\tilde{\Omega}(1/\varepsilon)$ users have $M_t/2$ samples each, the error at time $t$ is $\tilde{O}(1/\sqrt{t}+\sqrt{M_t}/(t\varepsilon))$. This is a universal error guarantee, valid for all arrival patterns of the users. Furthermore, it (almost) matches the existing lower bounds for the single-release setting at all time instants when users have contributed an equal number of samples.
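To give a flavour of user-level DP mean estimation, here is a minimal single-release sketch (not the paper's continual-release algorithm): each user's influence is bounded by clipping and averaging their samples first, and Laplace noise is calibrated to the resulting user-level sensitivity. The clipping bound and aggregation scheme are illustrative assumptions.

```python
import numpy as np

def user_level_dp_mean(user_samples, eps, clip=1.0, rng=None):
    """Release a user-level eps-DP estimate of the mean of all samples.

    user_samples: list of 1-D arrays, one per user; values assumed in [0, clip].
    Each user contributes a single bounded value (their clipped per-user mean),
    so swapping one user's entire data changes the aggregate by at most clip/n.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = len(user_samples)
    # One bounded contribution per user: the clipped per-user mean.
    per_user = np.array([np.clip(s, 0.0, clip).mean() for s in user_samples])
    sensitivity = clip / n  # user-level L1 sensitivity of the average below
    noise = rng.laplace(scale=sensitivity / eps)
    return per_user.mean() + noise
```

The continual-release setting additionally requires composing such releases across all time instants, which is where the paper's algorithm departs from this sketch.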
The necessity of data-driven decisions in healthcare strategy formulation is rapidly increasing. A reliable framework which helps identify factors impacting the market share of a healthcare provider facility or a hospital (from here on termed a Facility) is of key importance. This pilot study aims at developing a data-driven machine-learning regression framework which aids strategists in formulating key decisions to improve a Facility's market share, which in turn helps improve the quality of healthcare services. The US (United States) healthcare business is chosen for the study, and data spanning 60 key Facilities in Washington State across about 3 years of history is considered. In the current analysis, market share is defined as the ratio of a facility's encounters to the total encounters among the group of potential competitor facilities. The study proposes a novel two-pronged approach: competitor identification, followed by a regression approach to evaluate and predict market share. A model-agnostic technique, SHAP, is leveraged to quantify the relative importance of features impacting the market share. The proposed competitor-identification method develops Directed Acyclic Graphs (DAGs) and feature-level word vectors, and evaluates the key connected components at the facility level. This technique is robust since it is data-driven, which minimizes the bias of empirical techniques. After identifying the set of competitors among facilities, a regression model is developed to predict the market share. For relative quantification of features at the facility level, SHAP, a model-agnostic explainer, is incorporated. This helped to identify and rank, at each facility, the attributes which impact the market share.
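As a hedged sketch of the SHAP step only (the DAG-based competitor identification is not reproduced), the snippet below fits a tree-based regressor on hypothetical facility features and ranks them by mean absolute SHAP value; the feature names and data are illustrative, not the study's.

```python
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingRegressor

# Hypothetical facility-level features; the study's real features differ.
rng = np.random.default_rng(0)
X = pd.DataFrame({
    "bed_count": rng.integers(50, 500, 200),
    "avg_wait_time": rng.uniform(0.5, 6.0, 200),
    "physician_count": rng.integers(10, 200, 200),
})
y = 0.002 * X["bed_count"] - 0.03 * X["avg_wait_time"] + rng.normal(0, 0.05, 200)

model = GradientBoostingRegressor().fit(X, y)
# TreeExplainer computes exact SHAP values for tree ensembles.
shap_values = shap.TreeExplainer(model).shap_values(X)
# Rank features by mean absolute SHAP value (global importance).
importance = pd.Series(np.abs(shap_values).mean(axis=0), index=X.columns)
print(importance.sort_values(ascending=False))
```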
Unmanned Aerial Vehicle (UAV)-based remote sensing systems merged with computer vision have the potential to assist building construction and disaster management, such as damage assessment during earthquakes. A building's vulnerability to earthquakes can be assessed through inspections that take into account the expected damage progression of the relevant components and the contribution of the components to the performance of the structural system. Most of these inspections are conducted manually, leading to high utilization of manpower, time, and cost. This paper proposes a methodology to automate these inspections through UAV-based image data collection and a software library for post-processing that helps estimate seismic structural parameters. The key parameters considered here are the distances between adjacent buildings, building plan shape, building plan area, objects on the rooftop, and rooftop layout. The accuracy of the proposed methodology in estimating the above parameters is verified through field measurements taken with distance-measuring sensors and data obtained through Google Earth. Additional details and code can be accessed at https://uvrsabi.github.io/.
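As one concrete example of the kind of post-processing involved, building plan area can be computed from a footprint polygon (e.g., extracted from a UAV orthophoto) via the shoelace formula; this is a generic sketch, not the library's actual API.

```python
def plan_area(vertices):
    """Area of a simple polygon given as [(x, y), ...] in metres,
    computed with the shoelace formula; vertex order may be CW or CCW."""
    n = len(vertices)
    acc = 0.0
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        acc += x1 * y2 - x2 * y1
    return abs(acc) / 2.0

# A 20 m x 12 m rectangular footprint -> 240.0 square metres.
print(plan_area([(0, 0), (20, 0), (20, 12), (0, 12)]))
```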
Machine learning has become an integral part of engineering design and decision-making in several domains, including sports. Deep neural networks (DNNs) have been the state-of-the-art method for predicting the outcomes of professional sports events. However, apart from making highly accurate predictions of those outcomes, it is also necessary to answer questions such as "Why did the model predict that team A would win the match against team B?" DNNs are inherently black-box models. Therefore, it is required to provide high-quality, interpretable explanations for the model's predictions in sports. This paper explores a two-step explainable artificial intelligence (XAI) approach to predict the outcomes of matches in the Brazilian volleyball league (SuperLiga). In the first phase, we directly use interpretable rule-based ML models that allow a global understanding of the model's behaviour, based on Boolean Rule Column Generation (BRCG; extracts simple AND-OR classification rules) and logistic regression (LogReg; allows estimating feature importance scores). In the second phase, we build non-linear models such as support vector machines (SVM) and deep neural networks (DNN) to obtain predictive performance on the volleyball match outcomes. We construct "post-hoc" explanations for each data instance using ProtoDash, a method that finds prototypes in the training dataset that are most similar to the test instance, and SHAP, a method that estimates the contribution of each feature to the model's prediction. We evaluate the SHAP explanations using the faithfulness metric. Our results demonstrate the effectiveness of the explanations for the model's predictions.
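A minimal sketch of the first, directly interpretable phase (LogReg with feature-importance scores; BRCG and ProtoDash are not reproduced here), using hypothetical per-match features:

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

# Hypothetical per-match features; the study's real features differ.
rng = np.random.default_rng(1)
X = pd.DataFrame({
    "attack_efficiency_diff": rng.normal(0, 1, 300),
    "block_points_diff": rng.normal(0, 1, 300),
    "service_errors_diff": rng.normal(0, 1, 300),
})
y = (X["attack_efficiency_diff"] + 0.5 * X["block_points_diff"]
     + rng.normal(0, 0.5, 300) > 0).astype(int)  # 1 = team A wins

# Standardize so coefficient magnitudes are comparable importance scores.
Xs = StandardScaler().fit_transform(X)
clf = LogisticRegression().fit(Xs, y)
importance = pd.Series(clf.coef_[0], index=X.columns)
print(importance.sort_values(key=np.abs, ascending=False))
```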
Language models demonstrate both quantitative improvements and new qualitative capabilities with increasing scale. Despite their potentially transformative impact, these new capabilities are as yet poorly characterized. In order to inform future research, prepare for disruptive new model capabilities, and ameliorate socially harmful effects, it is vital that we understand the present and near-future capabilities and limitations of language models. To address this challenge, we introduce the Beyond the Imitation Game benchmark (BIG-bench). BIG-bench currently consists of 204 tasks, contributed by 442 authors across 132 institutions. Task topics are diverse, drawing from linguistics, childhood development, math, common-sense reasoning, biology, physics, social bias, software development, and beyond. BIG-bench focuses on tasks that are believed to be beyond the capabilities of current language models. We evaluate the behaviour of OpenAI's GPT models, Google's internal dense transformer architectures, and Switch-style sparse transformers on BIG-bench, across model sizes spanning millions to hundreds of billions of parameters. In addition, a team of human expert raters performed all tasks in order to provide a strong baseline. Findings include: model performance and calibration both improve with scale, but are poor in absolute terms (and when compared with rater performance); performance is remarkably similar across model classes, though with benefits from sparsity; tasks that improve gradually and predictably commonly involve a large knowledge or memorization component, whereas tasks that exhibit "breakthrough" behaviour at a critical scale often involve multiple steps or components, or brittle metrics; and social bias typically grows with scale in settings with ambiguous context, but this can be improved with prompting.
Acquiring information about large areas of the Earth's surface through satellite cameras allows us to see far more than we can see while standing on the ground. This helps us detect and monitor the physical characteristics of areas, such as land-use patterns, atmospheric conditions, forest cover, and many other aspects. The obtained images not only track continuous natural phenomena but are also crucial in tackling the global challenge of severe deforestation, of which the Amazon basin accounts for the largest share every year. Proper data analysis would help limit the adverse effects on the ecosystem and biodiversity, in the interest of a sustainable and healthy atmosphere. This report aims to label satellite image chips of the Amazon rainforest with atmospheric conditions and various classes of land cover or land use, using different machine learning and, superior to them, deep learning models. Evaluation is based on the F2 metric, while for the loss function we use both sigmoid cross-entropy and softmax cross-entropy. Images are fed indirectly to the machine learning classifiers after extracting features using pre-trained ImageNet architectures. For the deep learning models, ensembles of fine-tuned ImageNet pre-trained models are used via transfer learning. Our best score so far is 0.927 on the F2 metric.
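For reference, the F2 metric weights recall more heavily than precision. A small sketch of computing it for multi-label image chips with scikit-learn follows; the label columns are illustrative.

```python
import numpy as np
from sklearn.metrics import fbeta_score

# Each row is one image chip; columns are illustrative labels,
# e.g. [clear, cloudy, primary_forest, agriculture].
y_true = np.array([[1, 0, 1, 0],
                   [0, 1, 1, 1],
                   [1, 0, 0, 1]])
y_pred = np.array([[1, 0, 1, 1],
                   [0, 1, 1, 0],
                   [1, 0, 0, 1]])

# beta=2 weights recall twice as heavily as precision;
# average="samples" averages the per-chip scores.
print(fbeta_score(y_true, y_pred, beta=2, average="samples"))
```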
Data augmentation is an important component of robustness evaluations of natural language processing (NLP) models, and of enhancing the diversity of the data they are trained on. In this paper, we present NL-Augmenter, a new participatory Python-based natural language augmentation framework which supports the creation of both transformations (modifications to the data) and filters (data splits according to specific features). We describe the framework and an initial set of 117 transformations and 23 filters for a variety of natural language tasks. We demonstrate the efficacy of NL-Augmenter by using several of its transformations to analyze the robustness of popular natural language models. The infrastructure, datacards, and robustness analysis results are publicly available on the NL-Augmenter repository (\url{https://github.com/gem-benchmark/nl-augmenter}).
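The transformation interface is small; below is a hedged, self-contained sketch of a custom transformation written in the style of the repository's sentence-operation interface. The base class, registration machinery, and exact method names of the real framework are omitted and should be checked against the repository.

```python
import random
from typing import List

class SynonymNoiseTransformation:
    """Illustrative transformation in the spirit of NL-Augmenter's
    sentence operations: takes a sentence, returns perturbed variants.
    The real framework's base class and registration are omitted here."""

    def __init__(self, seed: int = 0):
        self.rng = random.Random(seed)
        self.synonyms = {"quick": "fast", "happy": "glad"}  # toy lexicon

    def generate(self, sentence: str) -> List[str]:
        # Replace known words with a synonym, keep everything else intact.
        tokens = [self.synonyms.get(t, t) for t in sentence.split()]
        return [" ".join(tokens)]

print(SynonymNoiseTransformation().generate("the quick brown fox"))
```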
The widespread rise of online social media (OSM) consumption among vast numbers of people poses the critical problem of curbing the spread of hateful content on these platforms. With an increasingly multilingual user base, the task of detecting and characterizing hate becomes more complex. Subtle variations in code-mixed text, along with script switching, only add to the complexity. This paper presents the solution of team Precog IIIT Hyderabad for the HASOC 2021 multilingual Twitter hate-speech detection challenge. We adopt a multilingual transformer-based approach and describe our architecture for all 6 subtasks of the challenge. Out of the 6 teams that participated in all the subtasks, our submission ranked 3rd overall.
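A hedged sketch of the core of such an approach, fine-tuning a multilingual transformer for binary hate-speech classification with Hugging Face Transformers; the checkpoint, hyperparameters, and dataset handles are illustrative assumptions, not the team's exact setup.

```python
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "xlm-roberta-base"  # multilingual checkpoint; illustrative choice
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

def tokenize(batch):
    # Truncate/pad tweets to a fixed length for batching.
    return tokenizer(batch["text"], truncation=True,
                     padding="max_length", max_length=128)

# `train_ds` / `val_ds` are assumed to be datasets with "text" and "label"
# columns, e.g. loaded via the `datasets` library:
# train_ds = train_ds.map(tokenize, batched=True)
# val_ds = val_ds.map(tokenize, batched=True)

args = TrainingArguments(output_dir="hasoc-xlmr", num_train_epochs=3,
                         per_device_train_batch_size=16)
# trainer = Trainer(model=model, args=args,
#                   train_dataset=train_ds, eval_dataset=val_ds)
# trainer.train()
```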
COVID-19 vaccines are our best bet for mitigating the ongoing onslaught of the pandemic. However, vaccines are also expected to be a limited resource. An optimal allocation strategy, especially in countries with access inequities and temporal separation of hotspots, might be an effective way of halting disease spread. We approach this problem by proposing a novel pipeline, VacSim, that dovetails deep reinforcement learning models into a contextual-bandits approach for optimizing the distribution of COVID-19 vaccines. Whereas the reinforcement learning models suggest better actions and rewards, the contextual bandits allow online modifications that accommodate day-to-day implementation in real-world scenarios. We evaluate this framework against a naive allocation approach of distributing vaccines in proportion to COVID-19 case incidence in five different states of India (Assam, Delhi, Jharkhand, Maharashtra and Nagaland) and demonstrate up to 9039 potential infections averted, along with a significant increase in the efficacy of limiting the spread over a period of 45 days through the VacSim approach. Our models and the platform are extensible to all states of India and potentially the globe. We also propose novel evaluation strategies, including standard compartmental-model-based projections and a causality-preserving evaluation of our model. Since all models carry assumptions that may need to be tested in various scenarios, we open-source our model VacSim and contribute a new reinforcement learning environment compatible with OpenAI Gym to make it extensible for real-world applications across the globe. (http://vacsim.tavlab.iiitd.edu.in:8000/)
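A hedged skeleton of an OpenAI-Gym-compatible allocation environment in the spirit of the open-sourced one; the state, action, reward, and dynamics here are toy illustrative assumptions, not VacSim's actual formulation.

```python
import gym
import numpy as np
from gym import spaces

class VaccineAllocationEnv(gym.Env):
    """Toy allocation environment: each step, split a fixed vaccine budget
    across `n_states` regions; reward is the negative of new infections."""

    def __init__(self, n_states: int = 5, budget: float = 1.0):
        super().__init__()
        self.n_states = n_states
        self.budget = budget
        # Observation: current case counts per region (illustrative state).
        self.observation_space = spaces.Box(0.0, np.inf, shape=(n_states,))
        # Action: unnormalized allocation weights, normalized in step().
        self.action_space = spaces.Box(0.0, 1.0, shape=(n_states,))
        self.cases = None

    def reset(self):
        self.cases = np.random.uniform(100, 1000, self.n_states)
        return self.cases.copy()

    def step(self, action):
        alloc = self.budget * action / (action.sum() + 1e-8)
        # Toy dynamics: growth damped by the vaccine share a region receives.
        growth = 1.05 - 0.1 * alloc
        new_cases = self.cases * growth
        reward = -(new_cases - self.cases).sum()
        self.cases = new_cases
        return self.cases.copy(), reward, False, {}
```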
Real-world classification problems typically exhibit an imbalanced or long-tailed label distribution, wherein many labels are associated with only a few samples. This poses a challenge for generalisation on such labels, and also makes naïve learning biased towards dominant labels. In this paper, we present two simple modifications of standard softmax cross-entropy training to cope with these challenges. Our techniques revisit the classic idea of logit adjustment based on the label frequencies, either applied post-hoc to a trained model, or enforced in the loss during training. Such adjustment encourages a large relative margin between logits of rare versus dominant labels. These techniques unify and generalise several recent proposals in the literature, while possessing firmer statistical grounding and empirical performance. A reference implementation of our methods is available at: https://github.com/google-research/google-research/tree/master/logit_adjustment
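The post-hoc variant of logit adjustment amounts to subtracting a scaled log of the estimated label priors from a trained model's logits before taking the argmax. A minimal numpy sketch, assuming logits and empirical class frequencies are given (the scaling parameter τ = 1 recovers the basic correction):

```python
import numpy as np

def logit_adjusted_predict(logits, class_priors, tau=1.0):
    """Post-hoc logit adjustment: argmax over f(x) - tau * log(pi_y).
    logits: (n_samples, n_classes); class_priors: empirical label frequencies."""
    adjusted = logits - tau * np.log(class_priors)
    return adjusted.argmax(axis=1)

# Example: a rare class (prior 0.01) gets its logit boosted relative
# to a dominant class (prior 0.99), flipping the prediction.
logits = np.array([[2.0, 1.9]])        # raw scores favour class 0
priors = np.array([0.99, 0.01])
print(logit_adjusted_predict(logits, priors))  # -> [1]
```

Subtracting log-priors penalizes dominant labels exactly in proportion to their frequency, which is what produces the large relative margin between rare and dominant labels described above.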